

Search for: All records

Creators/Authors contains: "Liu, Mengyu"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Safe Reinforcement Learning (safe RL) has been widely used in safety-critical cyber-physical systems (CPS) to achieve task goals while satisfying safety constraints. Analyzing vulnerabilities that can be exploited to violate safety (i.e., safety-violated vulnerabilities) is crucial for understanding and improving the robustness of safe RL policies in CPS. However, existing works are inadequate for addressing such vulnerabilities, as they either focus on vulnerabilities that merely degrade task performance (rather than causing safety violations) or rely on strong assumptions about an adversary’s capability (e.g., requiring explicit knowledge of the safety constraints). This paper aims to bridge this gap by studying safety-violated vulnerabilities of safe RL in CPS without requiring prior knowledge of the underlying safety constraints. To this end, we propose a novel adversarial framework based on Signal Temporal Logic (STL) mining. The framework first mines STL formulas to uncover the implicit safety constraints of a safe RL policy, and then synthesizes perturbation attacks that violate these constraints. The generated attacks can effectively and efficiently induce safety violations by adapting perturbations and identifying critical time intervals for applying them. We conduct extensive experiments across multiple CPS environments, and the results demonstrate the effectiveness and efficiency of our method. 
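The mining-then-attacking pipeline in this abstract can be illustrated with a minimal sketch. Assuming (hypothetically) that the mined specification is a simple "always" formula G(x <= c), the threshold c can be mined as the tightest bound satisfied by all safe traces, and a perturbed trace violates the implicit constraint exactly when its STL robustness is negative. All function names and traces below are illustrative, not the paper's implementation.

```python
# Sketch of STL mining for a safety constraint of the (assumed) form
# G(x <= c), followed by a robustness check on a perturbed trace.

def mine_threshold(safe_traces):
    """Tightest bound c such that G(x <= c) holds on every safe trace."""
    return max(max(trace) for trace in safe_traces)

def robustness_always_le(trace, c):
    """Robustness of G(x <= c): positive iff the trace satisfies it."""
    return min(c - x for x in trace)

safe_traces = [[0.1, 0.4, 0.3], [0.2, 0.5, 0.2]]
c = mine_threshold(safe_traces)           # mined constraint: G(x <= 0.5)
attacked = [0.2, 0.6, 0.3]                # perturbation pushes x past c
print(robustness_always_le(attacked, c))  # negative => safety violated
```

An attack synthesizer would then search for the smallest perturbation, applied over a critical time interval, that drives this robustness value below zero.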
  2. Safe reinforcement learning (safe RL) has been applied to synthesize control policies that maximize task rewards while adhering to safety constraints in simulated secure cyber-physical systems. However, the vulnerability of safe RL to adversarial attacks remains largely unexplored. We argue that understanding the safety vulnerabilities of learned control policies is crucial for ensuring true safety in real-world scenarios. To address this gap, we first formally define the safe RL problem using a formal specification language (Signal Temporal Logic, STL) and demonstrate that even optimal policies are susceptible to observation perturbations. We then introduce novel safety-violation attacks that exploit adversarial models trained with reversed safety constraints to induce unsafe behaviors. Lastly, through both theoretical analysis and experimental results, we demonstrate that our approach is more effective at violating safety constraints than existing adversarial RL methods, which primarily focus on reducing task rewards rather than compromising safety.
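The key idea of training the adversary with reversed safety constraints can be sketched in a few lines. Assuming a generic cost-based safe-RL setup (a task reward plus a penalized safety cost; the penalty weight and function names below are illustrative), the adversary's objective is the safety cost itself, so it is paid for inducing unsafe states rather than for merely degrading task reward.

```python
# Sketch of the reversed-constraint objective. In safe RL the agent is
# penalized for safety cost; the safety-violation adversary is rewarded
# for it instead. Names and the penalty weight are illustrative.

def victim_reward(task_reward, safety_cost, lam=10.0):
    # Safe RL victim: maximize task reward, heavily penalize violations.
    return task_reward - lam * safety_cost

def adversary_reward(task_reward, safety_cost):
    # Reversed constraint: the adversary maximizes the safety cost,
    # independently of task reward (unlike reward-degradation attacks).
    return safety_cost

# A transition that incurs safety cost hurts the victim but pays the adversary.
print(victim_reward(1.0, 0.5))     # 1.0 - 10*0.5 = -4.0
print(adversary_reward(1.0, 0.5))  # 0.5
```

In a full pipeline, the adversary policy would be trained with this reversed objective and then used to generate observation perturbations against the frozen victim policy.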
  3. Cyber-physical systems (CPSs) have many life-critical applications, where failure or malfunction can result in significant harm to human life, the environment, or substantial economic loss. It is therefore important to ensure their reliability, security, and robustness against attacks. However, there is no widely used toolbox for simulating CPSs and studying their security problems, especially the simulation of sensor attacks and defense strategies against them. In this work, we introduce CPSim, a user-friendly simulation toolbox for security problems in CPS. CPSim simulates common sensor attacks and countermeasures against them: we have implemented bias attacks, delay attacks, and replay attacks, along with various recovery-based methods against sensor attacks. Attack and recovery configurations can be customized through the provided APIs. CPSim has built-in numerical simulators and a range of implemented benchmarks. Moreover, CPSim is compatible with external simulators and can be deployed on a real testbed for control purposes.
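The three sensor-attack models the abstract lists are simple transformations of a sensor trace, and can be sketched as follows. The function names and parameters here are illustrative, not CPSim's actual API.

```python
# Sketch of the bias, delay, and replay sensor-attack models applied to a
# recorded trace of sensor readings. Names are illustrative, not CPSim's API.

def bias_attack(readings, bias):
    """Add a constant offset to every reading."""
    return [r + bias for r in readings]

def delay_attack(readings, steps):
    """Report stale values: each reading lags by `steps` samples."""
    return [readings[max(0, i - steps)] for i in range(len(readings))]

def replay_attack(readings, start, window):
    """From `start` onward, replay the `window` samples seen just before."""
    out = list(readings)
    for i in range(start, len(readings)):
        out[i] = readings[start - window + (i - start) % window]
    return out

trace = [0, 1, 2, 3, 4, 5]
print(bias_attack(trace, 10))      # [10, 11, 12, 13, 14, 15]
print(delay_attack(trace, 2))      # [0, 0, 0, 1, 2, 3]
print(replay_attack(trace, 3, 2))  # [0, 1, 2, 1, 2, 1]
```

A recovery-based defense would then try to reconstruct the true state from the attacked readings (e.g., from the plant model and past trusted measurements) rather than trusting the compromised sensor directly.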
  4. Cyber-physical systems (CPS) integrate sensing, control, computation, and networking with physical components and infrastructure connected by the Internet. Their autonomy and reliability have been enhanced by recent developments in safe reinforcement learning (safe RL). However, the vulnerability of safe RL to adversarial conditions has received minimal exploration. To truly ensure safety in physical-world applications, it is crucial to understand and address these potential safety weaknesses in learned control policies. In this work, we demonstrate a novel safety-violation attack that induces unsafe behaviors using adversarial models trained with reversed safety constraints. Experimental results show that the proposed method is more effective than existing works.
  5. Industries are embracing information technology and constructing more robust machines, known as cyber-physical systems (CPSs), to automate processes. CPSs are envisioned to be pervasive, coordinating and integrating computation, sensing, actuation, and physical processes. They have applications in life-critical scenarios, where their performance and reliability can directly affect human safety and well-being. However, CPSs are vulnerable to malicious attacks, and researchers have developed detectors to identify such attacks in various contexts. Surprisingly, little work has been done to detect attacks on the actuators of a CPS. Furthermore, actuators face a high risk of optimal hidden attacks designed by powerful attackers, which can push a system into an unsafe state without detection; to the best of our knowledge, no such attacks on actuators have been developed yet. In this paper, we design an optimal hidden attack on actuators and evaluate its effectiveness. First, we develop a mathematical model of the actuators and formulate a linear program for convex optimization. Second, we solve the optimization problem and simulate the optimal attack.
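The "model, then linear program" recipe in this abstract can be illustrated on a toy system. Assume (purely for illustration) a scalar linear plant x[k+1] = a*x[k] + b*(u[k] + d[k]), where d[k] is the actuator perturbation, and suppose stealthiness against the detector reduces to a per-step bound |d[k]| <= eps. Maximizing the final-state deviation is then a linear objective over box constraints, whose LP optimum sits at a vertex; a general instance would use an LP solver, but this special case has a closed form.

```python
# Sketch of an optimal hidden actuator attack as a linear program on a toy
# scalar system x[k+1] = a*x[k] + b*(u[k] + d[k]). The dynamics, the
# stealthiness bound |d[k]| <= eps, and all names are illustrative.

def optimal_hidden_attack(a, b, eps, horizon):
    # Final deviation contributed by the attack is
    #   sum_k a**(horizon-1-k) * b * d[k],
    # a linear objective; with box constraints the LP optimum is at a
    # vertex: d[k] = +eps or -eps, matching each coefficient's sign.
    coeffs = [a ** (horizon - 1 - k) * b for k in range(horizon)]
    d = [eps if c >= 0 else -eps for c in coeffs]
    deviation = sum(c * dk for c, dk in zip(coeffs, d))
    return d, deviation

d, dev = optimal_hidden_attack(a=0.9, b=1.0, eps=0.1, horizon=3)
print(d, dev)  # every d[k] saturates at +eps; dev = (0.81 + 0.9 + 1.0) * 0.1
```

With multi-dimensional state or a detector whose residual couples the d[k] across time, the vertex shortcut no longer applies and one would hand the same objective and constraints to a generic LP solver instead.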